# Multitask Text Generation

Llama 3.1 8B SuperNova EtherealHermes GGUF
Apache-2.0
An 8B-parameter large language model based on the Llama-3.1 architecture, offered in multiple quantized GGUF versions that can be run locally (as sketched below).
Large Language Model · English
tensorblock
44
1
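GGUF builds like this one are typically run locally with llama.cpp or its Python bindings rather than through the Transformers library. Below is a minimal sketch using llama-cpp-python; the local file name and sampling settings are illustrative assumptions, not values taken from this listing.

```python
# Minimal sketch: running a quantized GGUF model with llama-cpp-python.
# The model path is a hypothetical example; substitute whichever quantized
# file (e.g. Q4_K_M, Q8_0) you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3.1-8b-supernova-etherealhermes-Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

out = llm(
    "Explain what GGUF quantization is in one sentence.",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```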
Llama 3.1 8b DodoWild V2.01
An 8B-parameter language model based on the Llama 3.1 architecture, created by merging multiple models with mergekit and capable of general text generation.
Large Language Model · Transformers
Nexesenex
58
2
Qwen2.5 Dyanka 7B Preview
Apache-2.0
A 7B-parameter language model based on the Qwen2.5 architecture, created by merging multiple pre-trained models with the TIES method (sketched below).
Large Language Model · Transformers
Xiaojian9992024
1,497
8
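TIES merging operates on task vectors (each fine-tuned model's weights minus the base weights): it trims low-magnitude entries, elects a per-parameter sign by majority magnitude, and averages only the deltas that agree with the elected sign. The NumPy sketch below illustrates the idea for a single weight tensor; it is a simplified illustration under those assumptions, not the mergekit implementation.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.2):
    """Simplified TIES merge of several fine-tuned tensors onto one base tensor.

    base       -- base-model weight tensor
    finetuned  -- list of fine-tuned weight tensors with the same shape
    density    -- fraction of largest-magnitude delta entries kept per model
    """
    deltas = []
    for w in finetuned:
        d = w - base
        # Trim: keep only the top-`density` fraction of entries by magnitude.
        k = max(1, int(density * d.size))
        threshold = np.sort(np.abs(d), axis=None)[-k]
        deltas.append(np.where(np.abs(d) >= threshold, d, 0.0))

    deltas = np.stack(deltas)
    # Elect sign: the sign with the larger total delta mass wins per parameter.
    elected_sign = np.sign(deltas.sum(axis=0))
    # Merge: average only the deltas whose sign matches the elected sign.
    agree = (np.sign(deltas) == elected_sign) & (deltas != 0)
    summed = np.where(agree, deltas, 0.0).sum(axis=0)
    count = np.maximum(agree.sum(axis=0), 1)
    return base + summed / count

# Toy usage with random tensors standing in for real model weights.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
models = [base + rng.normal(scale=0.1, size=(4, 4)) for _ in range(3)]
print(ties_merge(base, models, density=0.3).shape)
```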
Li 14b V0.4 Slerp0.1
A 14B-parameter large language model merged with the SLERP method (sketched below), combining two base models: li-14b-v0.4 and miscii-14b-0218.
Large Language Model · Transformers
wanlige
70
7
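SLERP (spherical linear interpolation) blends two weight tensors along the arc between them instead of along a straight line, which tends to preserve the geometry of each parent's weights. A minimal NumPy sketch of the formula, applied to flattened tensors, is shown below; real merges such as this one apply it per layer with configurable interpolation factors.

```python
import numpy as np

def slerp(w0, w1, t, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t = 0 returns w0, t = 1 returns w1; intermediate values follow the arc
    between the two (flattened) weight vectors.
    """
    v0, v1 = w0.ravel(), w1.ravel()
    # Angle between the two weight vectors.
    cos_theta = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1) + eps)
    cos_theta = np.clip(cos_theta, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < eps:
        # Nearly colinear vectors: fall back to ordinary linear interpolation.
        return (1 - t) * w0 + t * w1
    sin_theta = np.sin(theta)
    s0 = np.sin((1 - t) * theta) / sin_theta
    s1 = np.sin(t * theta) / sin_theta
    return (s0 * v0 + s1 * v1).reshape(w0.shape)

# Toy usage: blend two random tensors 10% / 90%.
rng = np.random.default_rng(0)
a, b = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
print(slerp(a, b, t=0.1).shape)
```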
Phi 4 Model Stock V2
Phi-4-Model-Stock-v2 is a large language model merged from multiple Phi-4 variants using the model_stock merging method (sketched below), demonstrating strong performance across multiple benchmarks.
Large Language Model · Transformers
bunnycore
56
2
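The model_stock method averages several fine-tuned checkpoints and then interpolates that average back toward the base model, with the interpolation ratio derived from the angle between the fine-tuned deltas. The two-model NumPy sketch below is a rough illustration under that assumption (the ratio t = 2·cos(theta) / (1 + cos(theta)) follows the Model Stock paper), not mergekit's actual implementation.

```python
import numpy as np

def model_stock_two(base, w1, w2, eps=1e-8):
    """Rough sketch of Model Stock merging for two fine-tuned tensors."""
    d1 = (w1 - base).ravel()
    d2 = (w2 - base).ravel()
    # Angle between the two fine-tuned deltas determines the interpolation ratio.
    cos_theta = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2) + eps)
    cos_theta = np.clip(cos_theta, -1.0, 1.0)
    t = 2.0 * cos_theta / (1.0 + cos_theta + eps)
    # Pull the average of the fine-tuned weights back toward the base by (1 - t).
    avg = 0.5 * (w1 + w2)
    return t * avg + (1.0 - t) * base

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
merged = model_stock_two(base,
                         base + rng.normal(scale=0.1, size=(4, 4)),
                         base + rng.normal(scale=0.1, size=(4, 4)))
print(merged.shape)
```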
Dolphin3.0 Llama3.2 1B GGUF
A 1B-parameter quantized model based on the Llama 3.2 architecture, offered in multiple quantization variants for text generation tasks.
Large Language Model · English
bartowski
1,134
4
Llm Jp 3 13b
Apache-2.0
A 13-billion-parameter Transformer-based large language model developed by Japan's National Institute of Informatics (NII), supporting Japanese and English.
Large Language Model · Transformers · Multilingual
llm-jp
1,190
13
Buddyglass V0.3 Xortron7MethedUpSwitchedUp
A merge of multiple 8B-parameter Llama-3.1 models, combined using the model_stock method.
Large Language Model · Transformers
darkc0de
15
5
UNA ThePitbull 21.4B V2
UNA-ThePitbull-21.4B-v2 is a 21.4B-parameter large language model whose performance is reported to approach that of 70B models, combining emotional intelligence (EQ) with reasoning ability (IQ) and excelling at dialogue and text generation.
Large Language Model · Transformers
fblgit
16
16
Llama 3 Stinky V2 8B
Other
This is an 8B-parameter model based on the Llama-3 architecture, merged using the mergekit tool, with strong text generation capabilities.
Large Language Model · Transformers
nbeerbower
39
5
Neuralstar AlphaWriter 4x7b
Apache-2.0
NeuralStar_AlphaWriter_4x7b is a Mixture-of-Experts (MoE) language model built from four 7B expert models, each specialized in a different writing domain, and optimized for creative writing tasks (see the routing sketch below).
Large Language Model · Transformers
OmnicromsBrain
21
10
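In a Mixture-of-Experts model of this kind, a small gating network routes each token to a subset of expert feed-forward networks (typically the top two) and mixes their outputs by the gate probabilities. The NumPy sketch below shows top-2 routing over four experts for a single token vector; it is a conceptual illustration, not this model's actual architecture code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x, gate_w, expert_ws, top_k=2):
    """Route one token vector `x` to the top-k experts and mix their outputs.

    gate_w    -- (n_experts, d) gating weights
    expert_ws -- list of (d, d) expert matrices (stand-ins for FFN experts)
    """
    scores = gate_w @ x                   # one routing logit per expert
    top = np.argsort(scores)[-top_k:]     # indices of the top-k experts
    probs = softmax(scores[top])          # renormalize over the selected experts
    out = np.zeros_like(x)
    for p, idx in zip(probs, top):
        out += p * (expert_ws[idx] @ x)   # probability-weighted expert outputs
    return out

d, n_experts = 8, 4
rng = np.random.default_rng(0)
token = rng.normal(size=d)
gate = rng.normal(size=(n_experts, d))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
print(moe_forward(token, gate, experts).shape)
```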
Gemma Portuguese Luana 2b
Apache-2.0
This is a 2B-parameter Portuguese large language model based on the Gemma architecture, specifically optimized for Brazilian Portuguese, supporting instruction-following and text generation tasks.
Large Language Model · Transformers · Other
rhaymison
115
4
Strangemerges 17 7B Dare Ties
Apache-2.0
StrangeMerges_17-7B-dare_ties is the result of merging two models, Gille/StrangeMerges_16-7B-slerp and Gille/StrangeMerges_12-7B-slerp, using the dare_ties merging method (sketched below) via LazyMergekit.
Large Language Model · Transformers
Gille
20
1
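dare_ties combines DARE with TIES-style sign election: DARE randomly drops a large fraction of each model's delta parameters and rescales the survivors by 1/(1 - p) so the expected delta is preserved, and the sparsified deltas are then sign-elected and merged as in TIES. The NumPy sketch below shows only the DARE drop-and-rescale step for one tensor; it is illustrative, not the LazyMergekit implementation.

```python
import numpy as np

def dare_delta(base, finetuned, drop_rate=0.9, seed=0):
    """DARE step: randomly drop delta parameters and rescale the survivors.

    drop_rate -- probability of zeroing each delta entry (often 0.9 or higher);
                 kept entries are rescaled by 1 / (1 - drop_rate) so the
                 expected delta is unchanged.
    """
    rng = np.random.default_rng(seed)
    delta = finetuned - base
    keep_mask = rng.random(delta.shape) >= drop_rate
    return delta * keep_mask / (1.0 - drop_rate)

# Toy usage: sparsified, rescaled deltas from each fine-tune would then be
# sign-elected and averaged (the TIES step) before being added back to `base`.
rng = np.random.default_rng(1)
base = rng.normal(size=(4, 4))
ft = base + rng.normal(scale=0.1, size=(4, 4))
print(dare_delta(base, ft, drop_rate=0.9))
```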
Blurdus 7b V0.1
Apache-2.0
Blurdus-7b-v0.1 is a merge of three 7B-parameter models produced with LazyMergekit, showing strong results across multiple benchmarks.
Large Language Model · Transformers
gate369
80
1
Stealth V1.3
Apache-2.0
Stealth-v1.3 is an open-source large language model developed by Jan, supporting offline operation on local devices to ensure user privacy.
Large Language Model · Transformers · English
jan-hq
80
7
Supermario Slerp V2
Apache-2.0
supermario-slerp-v2 is a text generation model created by merging two 7B-parameter models using the SLERP method, demonstrating outstanding performance across multiple benchmarks.
Large Language Model · Transformers · English
jan-hq
15
2
Supermario V2
Apache-2.0
supermario-v2 is a merged model based on Mistral-7B-v0.1, combining three different models using the DARE_TIES method, with strong text generation capabilities.
Large Language Model · Transformers · English
jan-hq
77
8
Openhermes 2.5 Neural Chat 7b V3 2 7B
Apache-2.0
This model was created by merging OpenHermes-2.5-Mistral-7B and Intel's neural-chat-7b-v3-2 using the TIES merging method, and it focuses on text generation tasks.
Large Language Model · Transformers
Weyaxi
462
26
Misted 7B
Apache-2.0
Misted-7B is a 7B-parameter large language model created by merging OpenHermes-2-Mistral-7B and Mistral-7B-SlimOrca, designed primarily for text generation tasks.
Large Language Model · Transformers · English
Walmart-the-bag
386
8
Speechless Llama2 Luban Orca Platypus 13b
This model is a merge of AIDC-ai-business/Luban-13B and Open-Orca/OpenOrca-Platypus2-13B, forming a 13-billion-parameter large language model based on the Llama 2 architecture.
Large Language Model · Transformers · English
uukuguy
94
4
Mvp
Apache-2.0
MVP is a multitask supervised pretraining model based on the Transformer architecture, designed specifically for natural language generation tasks (a minimal usage sketch follows this entry).
Large Language Model · Transformers · Multilingual
RUCAIBox
6,146
7
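MVP is supported natively in the Hugging Face Transformers library, so a checkpoint such as RUCAIBox/mvp can be used through the standard text-to-text generation API. A minimal sketch follows; the prompt is an arbitrary example.

```python
from transformers import MvpForConditionalGeneration, MvpTokenizer

# Load the multitask supervised pre-trained MVP checkpoint from the Hub.
tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")

# Example summarization-style prompt; MVP is a text-to-text generation model.
inputs = tokenizer(
    "Summarize: MVP is a multitask supervised pre-training model for natural language generation.",
    return_tensors="pt",
)
generated = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```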